Results 1 - 20 of 28,591
2.
BMC Med Educ ; 24(1): 488, 2024 May 09.
Article En | MEDLINE | ID: mdl-38724939

BACKGROUND: Performing CPR (cardiopulmonary resuscitation) is an intricate skill whose success depends largely on the level of knowledge and skill of anesthesiology students. Therefore, this research was conducted to compare the effect of the scenario-based training method with that of the video training method on nurse anesthesia students' BLS (Basic Life Support) knowledge and skills. METHODS: This randomized quasi-experimental study involved 45 nurse anesthesia students of Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran, in 2022-2023. The university's practical room served as the research environment. The participants were randomly divided into three groups: scenario-based training, video training, and control. Data were collected with a knowledge questionnaire and a BLS skill assessment checklist before and after the intervention. RESULTS: Students' BLS knowledge and skill scores differed significantly before versus after the educational intervention in both the SG (scenario group; p < 0.001 for both) and the VG (video group; p = 0.008 and p < 0.001, respectively). No significant difference was observed in the CG (control group; p = 0.37 and p = 0.16). In addition, the mean BLS knowledge and skill scores in the SG were higher than those in the VG (p < 0.001). CONCLUSION: Given the beneficial impact of scenario-based education on fostering active participation, critical thinking, use of intellectual abilities, and learner creativity, this approach appears to hold an advantage over video training, particularly for teaching crucial subjects such as Basic Life Support.


Cardiopulmonary Resuscitation; Clinical Competence; Students, Nursing; Humans; Cardiopulmonary Resuscitation/education; Male; Female; Iran; Nurse Anesthetists/education; Educational Measurement; Video Recording; Young Adult; Adult
3.
PLoS One ; 19(5): e0303180, 2024.
Article En | MEDLINE | ID: mdl-38728283

Street View Images (SVI) are a common source of valuable data for researchers, who have used them to estimate pedestrian volumes, conduct demographic surveillance, and better understand built and natural environments in cityscapes. However, the most common source of publicly available SVI data is Google Street View, whose images are collected infrequently, making temporal analysis challenging, especially in low-population-density areas. Our main contribution is the development of an open-source data pipeline for processing 360-degree video recorded from a car-mounted camera. The video data are used to generate SVIs, which can then be used as input for longitudinal analysis. We demonstrate the use of the pipeline by collecting an SVI dataset over a 38-month longitudinal survey of Seattle, WA, USA during the COVID-19 pandemic. The output of our pipeline is validated through statistical analyses of pedestrian traffic in the images. We confirm known results in the literature and provide new insights into outdoor pedestrian traffic patterns. This study demonstrates the feasibility and value of collecting and using SVI for research purposes beyond what is possible with currently available SVI data. Our methods and dataset represent a first-of-its-kind longitudinal collection and application of SVI data for research purposes. Limitations and future improvements to the data pipeline and case study are also discussed.
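As an illustration of the kind of processing such a pipeline performs, the following minimal Python sketch samples frames at a fixed interval from a car-mounted 360-degree video so that they can be stored as street-view-style images. It assumes OpenCV is installed; the file names, sampling interval, and output layout are illustrative, not the authors' implementation.

```python
import os
import cv2  # OpenCV, assumed available


def sample_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every `every_n`-th frame of a 360-degree video as a JPEG.

    Each saved frame can act as a street-view-style image for later
    longitudinal analysis. Returns the number of frames written.
    """
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    written = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            written += 1
        idx += 1
    cap.release()
    return written


# Example call (paths and interval are placeholders):
# n = sample_frames("drive_2021_06_01.mp4", "svi_frames", every_n=30)
```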


COVID-19; COVID-19/epidemiology; Humans; Pandemics; SARS-CoV-2/isolation & purification; Washington/epidemiology; Longitudinal Studies; Pedestrians; Video Recording
4.
J Prev Med Hyg ; 65(1): E25-E35, 2024 Mar.
Article En | MEDLINE | ID: mdl-38706763

Background: Tobacco use and exposure are leading causes of morbidity and mortality worldwide. In the past decade, educational efforts to reduce tobacco use and exposure have extended to social media, including video-sharing platforms. YouTube is one of the most publicly accessed video-sharing platforms. Purpose: This cross-sectional descriptive study was conducted to identify and describe the sources, formats, and content of widely viewed YouTube videos on smoking cessation. Methods: In August to September 2023, the keywords "stop quit smoking" were used to search YouTube and identify the 100 videos with the highest view counts. Results: Collectively, these videos were viewed over 220 million times. The largest group (n = 35) was posted by nongovernmental/organizational sources, with a smaller number posted by consumers (n = 25), and only 11 posted by governmental agencies. The format used in the highest number of videos was the testimonial (n = 32 videos, over 77 million views). Other popular formats included animation (n = 23 videos, over 90 million views) and talks by professionals (n = 20 videos, almost 43 million views). Video content included evidence-based and non-evidence-based practices. Evidence-based strategies aligned with the U.S. Public Health Service Tobacco Treatment Guidelines (e.g., a health systems approach to tobacco treatment and medication management). Non-evidence-based strategies included mindfulness and hypnotherapy. One key finding was that environmental tobacco exposure received scant coverage across the videos. Conclusions: Social media platforms such as YouTube promise to reach large audiences at low cost without requiring high reading literacy. Additional attention is needed to create videos with up-to-date, accurate information that can engage consumers.


Smoking Cessation; Social Media; Humans; Cross-Sectional Studies; Smoking Cessation/methods; Video Recording; Tobacco Use Cessation/methods
5.
JAMA Netw Open ; 7(5): e2411512, 2024 May 01.
Article En | MEDLINE | ID: mdl-38748425

This cross-sectional study assesses patient preferences for various visual backgrounds during telemedicine video visits.


Patient Preference; Telemedicine; Humans; Telemedicine/methods; Female; Male; Middle Aged; Adult; Aged; Video Recording; Surveys and Questionnaires
6.
Article En | MEDLINE | ID: mdl-38722725

Utilization of hand-tracking cameras, such as Leap, for hand rehabilitation and functional assessments is an innovative approach to providing affordable alternatives for people with disabilities. However, prior to deploying these commercially available tools, a thorough evaluation of their performance for disabled populations is necessary. In this study, we provide an in-depth analysis of the accuracy of Leap's hand-tracking feature for individuals both with and without upper-body disabilities on common dynamic tasks used in rehabilitation. Leap is compared against motion capture with conventional techniques such as signal correlations, mean absolute errors, and digit segment length estimation. We also propose the use of dimensionality reduction techniques, such as Principal Component Analysis (PCA), to capture the complex, high-dimensional signal spaces of the hand. We found that Leap's hand-tracking performance did not differ between individuals with and without disabilities, yielding average signal correlations between 0.7 and 0.9. Both low and high mean absolute errors (between 10 and 80 mm) were observed across participants. Overall, Leap did well with general hand posture tracking, with the largest errors associated with the tracking of the index finger. Leap's hand model was found to be most inaccurate in the proximal digit segment, underestimating digit lengths with errors as high as 18 mm. Using PCA to quantify differences between the high-dimensional spaces of Leap and motion capture showed that high correlations between latent space projections were associated with high accuracy in the original signal space. These results point to the potential of low-dimensional representations of complex hand movements to support hand rehabilitation and assessment.
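To make the comparison procedure concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are installed) that computes per-channel correlations, mean absolute errors, and a PCA latent-space comparison between two synchronized tracking signals. The array shapes, channel counts, and function names are illustrative assumptions, not the study's code.

```python
import numpy as np
from sklearn.decomposition import PCA


def compare_tracking(leap: np.ndarray, mocap: np.ndarray, n_components: int = 3):
    """Compare two synchronized (time x channels) hand-tracking signals.

    Returns per-channel Pearson correlations, per-channel mean absolute
    errors (in the input units, e.g. mm), and correlations between PCA
    latent projections of the two signal sets.
    """
    corrs = np.array([np.corrcoef(leap[:, j], mocap[:, j])[0, 1]
                      for j in range(leap.shape[1])])
    mae = np.mean(np.abs(leap - mocap), axis=0)
    # Fit the low-dimensional space on the reference (motion capture) signals
    # and project both signal sets into it.
    pca = PCA(n_components=n_components).fit(mocap)
    z_leap, z_mocap = pca.transform(leap), pca.transform(mocap)
    latent_corrs = np.array([np.corrcoef(z_leap[:, k], z_mocap[:, k])[0, 1]
                             for k in range(n_components)])
    return corrs, mae, latent_corrs


# Example with synthetic data (600 samples, 15 channels):
# rng = np.random.default_rng(0)
# mocap = rng.normal(size=(600, 15))
# leap = mocap + rng.normal(scale=0.3, size=mocap.shape)
# print(compare_tracking(leap, mocap))
```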


Hand; Principal Component Analysis; Video Recording; Humans; Hand/physiology; Male; Female; Adult; Disabled Persons/rehabilitation; Middle Aged; Reproducibility of Results; Young Adult; Algorithms; Movement/physiology
7.
BMC Med Educ ; 24(1): 531, 2024 May 14.
Article En | MEDLINE | ID: mdl-38741079

BACKGROUND: An urgent need exists for innovative surgical video recording techniques in head and neck reconstructive surgeries, particularly in low- and middle-income countries where a surge in surgical procedures necessitates more skilled surgeons. This demand, significantly intensified by the COVID-19 pandemic, highlights the critical role of surgical videos in medical education. We aimed to identify a straightforward, high-quality approach to recording surgical videos at a low economic cost in the operating room, thereby contributing to enhanced patient care. METHODS: The recordings comprised six head and neck flap harvesting surgeries captured with a GoPro or one of two types of digital camera. Data were extracted from the recorded videos and the subsequent editing process, and some of the participants were later interviewed. RESULTS: Both camera types, set at 4K resolution and 30 frames per second (fps), produced satisfactory results. The GoPro, worn on the surgeon's head, moves in sync with the surgeon, offering a unique first-person perspective of the operation without needing an additional assistant. Though cost-effective and efficient, it lacks a zoom feature essential for close-up views. In contrast, while requiring occasional repositioning, the digital camera captures finer anatomical details due to its superior image quality and zoom capabilities. CONCLUSION: Merging these two systems could significantly advance the field of surgical video recording. This innovation holds promise for enhancing technical communication and bolstering video-based medical education, potentially addressing the global shortage of specialized surgeons.


COVID-19; Video Recording; Humans; COVID-19/epidemiology; Plastic Surgery Procedures/education; Surgical Flaps; SARS-CoV-2; Head/surgery; Neck/surgery
9.
PeerJ ; 12: e17091, 2024.
Article En | MEDLINE | ID: mdl-38708339

Monitoring the diversity and distribution of species in an ecosystem is essential to assess the success of restoration strategies. Implementing biomonitoring methods that provide a comprehensive assessment of species diversity and mitigate biases in data collection holds significant importance in biodiversity research. Additionally, ensuring that these methods are cost-efficient and require minimal effort is crucial for effective environmental monitoring. In this study, we compare the species-detection efficiency, cost, and effort of two non-destructive sampling techniques for surveying marine vertebrate species: Baited Remote Underwater Video (BRUV) and environmental DNA (eDNA) metabarcoding. Comparisons were conducted along the Sussex coast upon the introduction of the Nearshore Trawling Byelaw, which aims to boost the recovery of the dense kelp beds and the associated biodiversity that existed in the 1980s. We show that, overall, BRUV surveys are more affordable than eDNA; however, eDNA detects almost three times as many species as BRUV. eDNA and BRUV surveys are comparable in terms of effort required, unless eDNA analysis is carried out externally, in which case eDNA requires less effort for the lead researchers. Furthermore, we show that increased eDNA replication yields more informative results on community structure. We found that using both methods in conjunction provides a more complete view of biodiversity, with BRUV data supplementing eDNA monitoring by recording species missed by eDNA and by providing additional environmental and life history metrics. The results from this study will serve as a baseline of the marine vertebrate community in Sussex Bay, allowing future biodiversity monitoring projects to understand community structure as the ecosystem recovers following the removal of trawling pressure. Although this study was regional, the findings presented herein have relevance to marine biodiversity and conservation monitoring programs around the globe.
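As a simple illustration of how detections from the two survey methods can be compared, the Python sketch below summarises shared and method-exclusive species from presence lists; the species names in the commented example are placeholders, not records from the study.

```python
def compare_methods(edna_species: set[str], bruv_species: set[str]) -> dict:
    """Summarise overlap between species detected by eDNA and by BRUV."""
    return {
        "edna_only": sorted(edna_species - bruv_species),
        "bruv_only": sorted(bruv_species - edna_species),
        "shared": sorted(edna_species & bruv_species),
        "combined_richness": len(edna_species | bruv_species),
    }


# Example with illustrative species lists (not the study's data):
# print(compare_methods(
#     {"Raja clavata", "Dicentrarchus labrax", "Mustelus asterias"},
#     {"Dicentrarchus labrax", "Scyliorhinus canicula"},
# ))
```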


Biodiversity; DNA, Environmental; Environmental Monitoring; DNA, Environmental/analysis; DNA, Environmental/genetics; Animals; Environmental Monitoring/methods; Aquatic Organisms/genetics; Video Recording/methods; Ecosystem; DNA Barcoding, Taxonomic/methods
10.
Brain Behav ; 14(5): e3510, 2024 May.
Article En | MEDLINE | ID: mdl-38715394

BACKGROUND: Multiple system atrophy (MSA) is a neurodegenerative disease that progresses rapidly and has a poor prognosis. This study aimed to assess the value of video oculomotor evaluation (VOE) in the differential diagnosis of MSA and Parkinson's disease (PD). METHODS: In total, 28 patients with MSA, 31 patients with PD, and 30 age- and sex-matched healthy controls (HC) were screened and included in this study. The evaluation consisted of a gaze-holding test, smooth pursuit eye movement (SPEM), random saccade, and optokinetic nystagmus (OKN) tests. RESULTS: The MSA and PD groups had more abnormalities and decreased SPEM gain compared with the HC group (64.29% vs. 35.48% vs. 10%, p < .001). The SPEM gain in the MSA group was significantly lower than that in the PD group at specific frequencies. Patients with MSA and PD showed prolonged latencies in all saccade directions compared with the HC group. However, the two diseases showed no significant differences in the saccade parameters. The OKN gain gradually decreased from the HC to the PD and the MSA groups (p < .05). Compared with the PD group, the gain in the MSA group was further decreased in the OKN test at 30°/s (left, p = .010; right, p = .016). Receiver operating characteristic curves showed that the combination of oculomotor parameters with age and course of disease could aid in the differential diagnosis of patients with MSA and PD, with a sensitivity of 89.29% and a specificity of 70.97%. CONCLUSIONS: The combination of oculomotor parameters and clinical data may aid in the differential diagnosis of MSA and PD. Furthermore, VOE is vital in the identification of neurodegenerative diseases.
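For readers unfamiliar with how oculomotor and clinical variables can be combined into a single diagnostic score, the Python sketch below fits a logistic regression and reports an ROC AUC using scikit-learn. The feature set, model choice, and in-sample evaluation are illustrative assumptions rather than the study's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve


def classify_msa_vs_pd(features: np.ndarray, labels: np.ndarray):
    """Combine oculomotor and clinical features into one diagnostic score.

    `features` is (patients x variables), e.g. SPEM gain, OKN gain, saccade
    latency, age, and disease duration; `labels` are 1 for MSA and 0 for PD.
    Returns the in-sample AUC and the ROC curve (fpr, tpr, thresholds); a
    real analysis would use cross-validation rather than in-sample scoring.
    """
    model = LogisticRegression(max_iter=1000).fit(features, labels)
    scores = model.predict_proba(features)[:, 1]
    return roc_auc_score(labels, scores), roc_curve(labels, scores)
```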


Multiple System Atrophy; Parkinson Disease; Saccades; Humans; Multiple System Atrophy/diagnosis; Multiple System Atrophy/physiopathology; Parkinson Disease/diagnosis; Parkinson Disease/physiopathology; Male; Diagnosis, Differential; Female; Middle Aged; Aged; Saccades/physiology; Video Recording; Nystagmus, Optokinetic/physiology; Pursuit, Smooth/physiology
11.
Sci Rep ; 14(1): 10579, 2024 05 08.
Article En | MEDLINE | ID: mdl-38720014

The complex dynamics of animal manoeuvrability in the wild is extremely challenging to study. The cheetah (Acinonyx jubatus) is a perfect example: despite great interest in its unmatched speed and manoeuvrability, obtaining complete whole-body motion data from these animals remains an unsolved problem. This is especially difficult in wild cheetahs, where it is essential that the methods used are remote and do not constrain the animal's motion. In this work, we use data obtained from cheetahs in the wild to present a trajectory optimisation approach for estimating the 3D kinematics and joint torques of subjects remotely. We call this approach kinetic full trajectory estimation (K-FTE). We validate the method on a dataset comprising synchronised video and force plate data. We are able to reconstruct the 3D kinematics with an average reprojection error of 17.69 pixels (62.94% PCK using the nose-to-eye(s) length segment as a threshold), while the estimates produce an average root-mean-square error of 171.3 N (≈17.16% of peak force during stride) for the estimated ground reaction force when compared against the force plate data. While the joint torques cannot be directly validated against ground truth data, as no such data are available for cheetahs, the estimated torques agree with previous studies of quadrupeds in controlled settings. These results will enable deeper insight into the study of animal locomotion in a more natural environment for both biologists and roboticists.
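The evaluation metrics quoted above can be computed with a few lines of NumPy; the sketch below shows one plausible way to obtain a mean reprojection error, a PCK value, and a ground-reaction-force RMSE. Array shapes and thresholds are chosen for illustration, not taken from the paper.

```python
import numpy as np


def reprojection_metrics(pred_px: np.ndarray, true_px: np.ndarray,
                         threshold_px: float) -> tuple[float, float]:
    """Mean reprojection error (pixels) and PCK for (N x 2) keypoint arrays.

    PCK is the fraction of keypoints whose error is below `threshold_px`,
    e.g. a nose-to-eye segment length expressed in pixels.
    """
    errors = np.linalg.norm(pred_px - true_px, axis=1)
    return float(errors.mean()), float((errors < threshold_px).mean())


def grf_rmse(estimated_force: np.ndarray, plate_force: np.ndarray) -> float:
    """Root-mean-square error between estimated and measured ground reaction force."""
    return float(np.sqrt(np.mean((estimated_force - plate_force) ** 2)))
```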


Acinonyx; Acinonyx/physiology; Animals; Biomechanical Phenomena; Imaging, Three-Dimensional; Locomotion/physiology; Torque; Video Recording
12.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article En | MEDLINE | ID: mdl-38720020

Research on video analytics, especially in the area of human behavior recognition, has become increasingly popular in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. Specifically, the use of 3D convolution in human behavior recognition has been the subject of growing interest. However, the increased dimensionality brings challenges such as a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. The training speed can be considerably slow without the support of powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
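As one way to picture what temporal compression of a video clip can look like, the Python sketch below keeps a frame only when it differs sufficiently from the last retained frame. This simple frame-difference heuristic is an assumption made for illustration; it is not the ATC module described in the paper.

```python
import numpy as np


def drop_redundant_frames(frames: np.ndarray, threshold: float = 8.0) -> np.ndarray:
    """Return indices of frames to keep from a clip of shape (T, H, W, C).

    A frame is kept when its mean absolute pixel difference from the last
    kept frame exceeds `threshold` (in pixel-intensity units); otherwise it
    is treated as redundant and dropped.
    """
    kept = [0]
    for t in range(1, frames.shape[0]):
        diff = np.mean(np.abs(frames[t].astype(np.float32)
                              - frames[kept[-1]].astype(np.float32)))
        if diff > threshold:
            kept.append(t)
    return np.array(kept)
```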


Algorithms; Data Compression; Video Recording; Humans; Data Compression/methods; Human Activities; Deep Learning; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods
13.
J Exp Biol ; 227(9)2024 Apr 15.
Article En | MEDLINE | ID: mdl-38722696

Animals deliver and withstand physical impacts in diverse behavioral contexts, from competing rams clashing their antlers together to archerfish impacting prey with jets of water. Though the ability of animals to withstand impact has generally been studied by focusing on morphology, behaviors may also influence impact resistance. Mantis shrimp exchange high-force strikes on each other's coiled, armored telsons (tailplates) during contests over territory. Prior work has shown that telson morphology has high impact resistance. I hypothesized that the behavior of coiling the telson also contributes to impact energy dissipation. By measuring impact dynamics from high-speed videos of strikes exchanged during contests between freely moving animals, I found that approximately 20% more impact energy was dissipated by the telson as compared with findings from a prior study that focused solely on morphology. This increase is likely due to behavior: because the telson is lifted off the substrate, the entire body flexes after contact, dissipating more energy than exoskeletal morphology does on its own. While variation in the degree of telson coil did not affect energy dissipation, proportionally more energy was dissipated from higher velocity strikes and from strikes from more massive appendages. Overall, these findings show that analysis of both behavior and morphology is crucial to understanding impact resistance, and suggest future research on the evolution of structure and function under the selective pressure of biological impacts.
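To make the energy-dissipation idea concrete, the short Python sketch below computes the kinetic energy lost between pre- and post-impact appendage velocities digitised from high-speed video; the mass and velocity values in the commented example are illustrative, not measurements from the study.

```python
def energy_dissipated(mass_kg: float, v_in_ms: float, v_out_ms: float) -> tuple[float, float]:
    """Kinetic energy lost at impact (joules) and the dissipated fraction.

    Velocities would come from tracking the striking appendage just before
    and just after contact in high-speed video.
    """
    ke_in = 0.5 * mass_kg * v_in_ms ** 2
    ke_out = 0.5 * mass_kg * v_out_ms ** 2
    lost = ke_in - ke_out
    return lost, lost / ke_in


# Example with illustrative values (not data from the study):
# print(energy_dissipated(mass_kg=0.0005, v_in_ms=10.0, v_out_ms=6.0))
```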


Crustacea; Animals; Biomechanical Phenomena; Crustacea/physiology; Crustacea/anatomy & histology; Energy Metabolism; Predatory Behavior/physiology; Behavior, Animal/physiology; Video Recording
14.
J Exp Biol ; 227(9)2024 Apr 15.
Article En | MEDLINE | ID: mdl-38726757

Differences in the physical and behavioral attributes of prey are likely to impose disparate demands of force and speed on the jaws of a predator. Because of biomechanical trade-offs between force and speed, this presents an interesting conundrum for predators of diverse prey types. Loggerhead shrikes (Lanius ludovicianus) are medium-sized (∼50 g) passeriform birds that dispatch and feed on a variety of arthropod and vertebrate prey, primarily using their beaks. We used high-speed video of shrikes biting a force transducer in lateral view to obtain corresponding measurements of bite force, upper and lower bill linear and angular displacements, and velocities. Our results show that upper bill depression (about the craniofacial hinge) is more highly correlated with bite force, whereas lower bill elevation is more highly correlated with jaw-closing velocity. These results suggest that the upper and lower jaws might play different roles in generating force and speed, respectively, in these and perhaps other birds as well. We hypothesize that a division of labor between the jaws may allow shrikes to capitalize on elements of force and speed without compromising performance. As expected on theoretical grounds, bite force trades off against jaw-closing velocity during the act of biting, although peak bite force and jaw-closing velocity across individual shrikes show no clear signs of a force-velocity trade-off. As a result, shrikes appear to bite with jaw-closing velocities and forces that maximize biting power, which may be selectively advantageous for predators of diverse prey that require both jaw-closing force and speed.
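Since mechanical power is the product of force and velocity, the trade-off described above can be explored with the small Python sketch below, which locates the force-velocity combination that maximises power along a sampled force-velocity curve. The linear trade-off in the commented example is a textbook simplification, not the shrikes' measured relationship.

```python
import numpy as np


def peak_power_point(force_n: np.ndarray, velocity: np.ndarray) -> tuple[int, float]:
    """Index and value of maximum mechanical power (force x velocity).

    Illustrates why power peaks at intermediate force and velocity rather
    than at either extreme of a force-velocity trade-off.
    """
    power = force_n * velocity
    i = int(np.argmax(power))
    return i, float(power[i])


# Example with a linear force-velocity trade-off (illustrative units):
# v = np.linspace(0.0, 10.0, 101)      # jaw-closing velocity
# f = 5.0 * (1.0 - v / 10.0)           # force declines linearly with velocity
# print(peak_power_point(f, v))        # power peaks near the midpoint
```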


Bite Force; Jaw; Animals; Biomechanical Phenomena; Jaw/physiology; Passeriformes/physiology; Predatory Behavior/physiology; Beak/physiology; Video Recording
15.
IEEE Trans Image Process ; 33: 3256-3270, 2024.
Article En | MEDLINE | ID: mdl-38696298

Video-based referring expression comprehension is a challenging task that requires locating the referred object in each video frame of a given video. While many existing approaches treat this task as an object-tracking problem, their performance is heavily reliant on the quality of the tracking templates. Furthermore, when there is not enough annotation data to assist in template selection, the tracking may fail. Other approaches are based on object detection, but they often use only one adjacent frame of the key frame for feature learning, which limits their ability to establish the relationship between different frames. In addition, improving the fusion of features from multiple frames and referring expressions to effectively locate the referents remains an open problem. To address these issues, we propose a novel approach called the Multi-Stage Image-Language Cross-Generative Fusion Network (MILCGF-Net), which is based on one-stage object detection. Our approach includes a Frame Dense Feature Aggregation module for dense feature learning of adjacent time sequences. Additionally, we propose an Image-Language Cross-Generative Fusion module as the main body of multi-stage learning to generate cross-modal features by calculating the similarity between video and expression, and then refining and fusing the generated features. To further enhance the cross-modal feature generation capability of our model, we introduce a consistency loss that constrains the image-language similarity and language-image similarity matrices during feature generation. We evaluate our proposed approach on three public datasets and demonstrate its effectiveness through comprehensive experimental results.
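As a rough illustration of the consistency idea, the Python sketch below penalises disagreement between an image-to-language similarity matrix and the transpose of a language-to-image similarity matrix. This mean-squared formulation is an assumption for illustration and may differ from the loss actually used in MILCGF-Net.

```python
import numpy as np


def consistency_loss(sim_img_to_text: np.ndarray, sim_text_to_img: np.ndarray) -> float:
    """Penalise disagreement between the two cross-modal similarity matrices.

    `sim_img_to_text` has shape (N_img, N_text) and `sim_text_to_img` has
    shape (N_text, N_img); after transposing the latter, both should describe
    the same pairwise similarities, so their squared difference is penalised.
    """
    return float(np.mean((sim_img_to_text - sim_text_to_img.T) ** 2))
```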


Algorithms; Image Processing, Computer-Assisted; Video Recording; Video Recording/methods; Image Processing, Computer-Assisted/methods; Humans
16.
Sci Rep ; 14(1): 10225, 2024 05 03.
Article En | MEDLINE | ID: mdl-38702374

This study aimed to analyze the effect of laterality and instructional video on soccer goalkeepers' dive kinematics in the penalty kick. Eight goalkeepers from youth categories (U15, U17, U20) were randomly divided into a control group (CG) and a video instruction group (VG). The latter performed 20 penalty defense trials on the field with balls launched by a machine: ten before and ten after watching an instructional video intended to improve diving kinematics. The CG only performed the dives. Three cameras recorded the trials. A markerless motion capture technique (OpenPose) was used for identification and tracking of joints and anatomical references on video. The pose data were used for 3D reconstruction. In the post-instruction condition, the VG differed from the CG in knee flexion/extension angle, time to reach peak resultant velocity, frontal step distance, and frontal departure angle, which generated greater acceleration during the dive. Non-dominant-leg-side dives had higher resultant velocity during 88.4-100% of the diving cycle, a different knee flexion/extension angle, and higher values of frontal step distance. The instructional video thus generated an acute change in the diving movement pattern of young goalkeepers, as shown by the comparison of the control and video instruction groups in the post-instruction condition.
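As an example of the kind of kinematic variable derived from reconstructed pose data, the Python sketch below computes a knee flexion/extension angle from 3D hip, knee, and ankle positions. The coordinates in the commented example are made up for illustration and do not come from the study.

```python
import numpy as np


def knee_angle_deg(hip: np.ndarray, knee: np.ndarray, ankle: np.ndarray) -> float:
    """Knee angle in degrees from 3D joint positions.

    180 degrees corresponds to a fully extended knee; smaller values indicate
    greater flexion.
    """
    thigh = hip - knee
    shank = ankle - knee
    cos_angle = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))


# Example with illustrative coordinates in metres:
# print(knee_angle_deg(np.array([0.0, 0.9, 0.00]),
#                      np.array([0.0, 0.5, 0.05]),
#                      np.array([0.0, 0.1, 0.00])))
```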


Soccer; Video Recording; Humans; Soccer/physiology; Biomechanical Phenomena; Adolescent; Male; Athletic Performance/physiology; Functional Laterality/physiology
17.
Article En | MEDLINE | ID: mdl-38717248

A video can help highlight the real-time steps, anatomy, and technical aspects of a case that may be difficult to convey with text or static images alone. Editing with a regimented workflow allows only essential information to be transmitted to the viewer while keeping the editing process itself efficient. This video tutorial breaks down the fundamentals of surgical video editing, with tips and pointers to simplify the workflow.


Video Recording; Humans; Surgical Procedures, Operative/methods; Workflow
18.
Sci Rep ; 14(1): 9971, 2024 04 30.
Article En | MEDLINE | ID: mdl-38693325

Sociopositive interactions with conspecifics are essential for equine welfare and quality of life. This study aimed to validate the use of wearable ultra-wideband (UWB) technology to quantify the spatial relationships and dynamics of social behaviour in horses by continuous (1/s) measurement of interindividual distances. After testing the UWB devices' spatiotemporal accuracy in a static environment, the validity, feasibility, and utility of the UWB measurements under dynamic field conditions were assessed in a group of 8 horses. Comparison of the proximity measurements with video surveillance data established the measurement accuracy and validity (r = 0.83, p < 0.0001) of the UWB technology. The utility for social behaviour research was demonstrated by the excellent agreement between affiliative relationships (preferred partners) identified using UWB and those identified from video observations. The horses remained a median of 5.82 m (95% CI 5.13-6.41 m) apart from each other and spent a median of 20% (95% CI 14-26%) of their time within 3 m of their preferred partner. The proximity measurements and corresponding speed calculation allowed the identification of affiliative versus agonistic approaches based on differences in the approach speed and the distance and duration of the resulting proximity. Affiliative approaches were statistically significantly slower (median: 1.57 km/h, 95% CI 1.26-1.92 km/h, p = 0.0394) and resulted in greater proximity (median: 36.75 cm, 95% CI 19.5-62 cm, p = 0.0003) to the approached horse than agonistic approaches (median: 3.04 km/h, 95% CI 2.16-3.74 km/h; median proximity: 243 cm, 95% CI 130-319 cm), which caused an immediate retreat of the approached horse at a significantly greater speed (median: 3.77 km/h, 95% CI 3.52-5.85 km/h, p < 0.0001) than the approach.
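To show how approach speed and resulting proximity can be derived from the continuous distance measurements described above, the Python sketch below computes a closing speed from a UWB distance time series and applies a crude affiliative/agonistic rule. The cutoff values are illustrative placeholders loosely informed by the medians reported in the abstract, not thresholds defined by the study.

```python
import numpy as np


def closing_speed_kmh(distance_m: np.ndarray, sample_rate_hz: float = 1.0) -> np.ndarray:
    """Rate at which the interindividual distance shrinks, in km/h.

    `distance_m` is a 1-D time series of distance measurements (metres)
    sampled `sample_rate_hz` times per second; positive values mean the
    horses are getting closer.
    """
    return -np.diff(distance_m) * sample_rate_hz * 3.6


def classify_approach(approach_speed_kmh: float, final_proximity_cm: float,
                      speed_cutoff_kmh: float = 2.5,
                      proximity_cutoff_cm: float = 100.0) -> str:
    """Crude label for an approach based on its speed and resulting proximity."""
    if approach_speed_kmh < speed_cutoff_kmh and final_proximity_cm < proximity_cutoff_cm:
        return "affiliative"
    return "agonistic"
```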


Behavior, Animal; Social Behavior; Animals; Horses; Male; Female; Wearable Electronic Devices; Video Recording
19.
BMC Public Health ; 24(1): 1216, 2024 May 02.
Article En | MEDLINE | ID: mdl-38698404

BACKGROUND: Acute pancreatitis (AP) is a common acute digestive system disorder, with patients often turning to TikTok for AP-related information. However, the quality of the platform's AP-related videos has not been thoroughly investigated. OBJECTIVE: The main purpose of this study is to evaluate the quality of videos about AP on TikTok, and the secondary purpose is to examine factors related to video quality. METHODS: This study involved retrieving AP-related videos from TikTok and screening and analyzing them based on predefined inclusion and exclusion criteria. Relevant data were extracted and compiled for evaluation. Video quality was scored using the DISCERN instrument and the Health on the Net (HONcode) score, complemented by a newly introduced Acute Pancreatitis Content Score (APCS). Pearson correlation analysis was used to assess the correlation between video quality scores and user engagement metrics such as likes, comments, favorites, retweets, and video duration. RESULTS: A total of 111 TikTok videos were included for analysis, and video publishers comprised physicians (89.18%), news media organizations (13.51%), individual users (5.41%), and medical institutions (0.9%). The majority of videos focused on AP-related educational content (64.87%), followed by physicians' diagnostic and treatment records (15.32%) and personal experiences (19.81%). The mean scores for DISCERN, HONcode, and APCS were 33.05 ± 7.87, 3.09 ± 0.93, and 1.86 ± 1.30, respectively. The highest-scoring videos were those posted by physicians (DISCERN 35.17 ± 7.02, HONcode 3.31 ± 0.56, and APCS 1.94 ± 1.34). According to the APCS, the main contents focused on etiology (n = 55, 49.5%) and clinical presentations (n = 36, 32.4%), followed by treatment (n = 24, 21.6%), severity (n = 20, 18.0%), prevention (n = 19, 17.1%), pathophysiology (n = 17, 15.3%), definitions (n = 13, 11.7%), examinations (n = 10, 9%), and other related content. There was no correlation between the scores of the three evaluation tools and the number of followers, likes, comments, favorites, and retweets of the videos. However, DISCERN (r = 0.309) and APCS (r = 0.407) showed a significant positive correlation with video duration, while the HONcode score showed no correlation with video duration. CONCLUSIONS: The general quality of TikTok videos related to AP is poor; however, the content posted by medical professionals shows relatively higher quality, predominantly focusing on clinical presentations and etiologies. There is a discernible correlation between video duration and quality ratings, indicating that a combined approach incorporating these evaluation instruments can comprehensively assess AP-related content on TikTok.
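For reference, a correlation such as the one reported between quality scores and video duration can be computed with SciPy, as in the Python sketch below; the numbers in the commented example are placeholders, not the study's data.

```python
from scipy.stats import pearsonr


def quality_duration_correlation(scores: list[float], durations_s: list[float]):
    """Pearson correlation (r, p) between quality scores and durations in seconds."""
    r, p = pearsonr(scores, durations_s)
    return r, p


# Example with placeholder values:
# print(quality_duration_correlation([28, 35, 41, 30, 38], [45, 90, 180, 60, 120]))
```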


Pancreatitis; Video Recording; Humans; Pancreatitis/therapy; Pancreatitis/diagnosis; Reproducibility of Results; Acute Disease; Social Media
20.
BMC Vet Res ; 20(1): 172, 2024 May 03.
Article En | MEDLINE | ID: mdl-38702691

BACKGROUND: Lameness examinations are commonly performed in equine medicine. Advancements in digital technology have increased the use of video recordings for lameness assessment; however, no standard for the ideal video angle is available, which yields videos of poor diagnostic quality. The objective of this study was to evaluate the effect of video angle on the subjective assessment of front limb lameness. A randomized, blinded, crossover study was performed. Six horses, with and without mechanically induced forelimb solar pain, were recorded from 9 video angles, including trotting directly away from and towards the video camera, trotting away from and towards a video camera placed to the left and right of midline, and trotting in a circle with the video camera placed on the inside and outside of the circle. Videos were randomized and assessed by three expert equine veterinarians using a 0-5 point scoring system. Objective lameness parameters were collected using a body-mounted inertial sensor system (Lameness Locator®, Equinosis LLC). Interobserver agreement for subjective lameness scores and ease of grading scores was determined. RESULTS: Induction of lameness was successful in all horses. There was excellent agreement between objective lameness parameters and subjective lameness scores (AUC of the ROC = 0.87). For horses in the "lame" trials, interobserver agreement was moderate for video angle 2 when degree of lameness was considered and perfect for video angles 2 and 9 when lameness was considered as a binary outcome. All other angles had no to fair agreement. For horses in the "sound" trials, interobserver agreement was perfect for video angle 5. All other video angles had slight to moderate agreement. CONCLUSIONS: When video assessment of forelimb lameness is required, at a minimum a video of the horse trotting directly towards the camera is recommended. Other video angles may provide supportive information regarding lameness characteristics.
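As an illustration of the agreement and diagnostic-accuracy statistics mentioned above, the Python sketch below computes Cohen's kappa between two observers and an ROC AUC of subjective scores against a sensor-based lameness label using scikit-learn. It simplifies the study's multi-observer design, and any example inputs would be invented rather than the study's data.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score


def rater_agreement(scores_a: list[int], scores_b: list[int]) -> float:
    """Cohen's kappa between two observers' lameness scores for the same videos."""
    return cohen_kappa_score(scores_a, scores_b)


def subjective_vs_objective_auc(objective_lame: list[int],
                                subjective_scores: list[float]) -> float:
    """AUC of subjective scores against the objective (sensor-based) lameness label."""
    return roc_auc_score(objective_lame, subjective_scores)
```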


Cross-Over Studies; Horse Diseases; Lameness, Animal; Video Recording; Animals; Horses; Lameness, Animal/diagnosis; Horse Diseases/diagnosis; Forelimb; Female; Male
...